
docs: lint all md files, add docs #45

Merged

Conversation

@cyril-corbon (Collaborator) commented Apr 21, 2023

Description

  • harmonize namespace to druid-operator
  • remove references to old operator version
  • format md files
  • set user / group / fsGroup, or the operator crashes in 1.26 (see the sketch below)
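
A minimal sketch of the pod-level security context this item refers to, assuming the chart exposes it as a `securityContext` key in `chart/values.yaml` (key placement is an assumption; the uid/gid values are the ones discussed in the review thread below and match the non-root user of the operator image):

```yaml
# chart/values.yaml (sketch): pod-level security context for the operator pod.
# Key placement is an assumption; the values mirror the non-root uid/gid 65532
# discussed later in this PR.
securityContext:
  runAsNonRoot: true
  runAsUser: 65532
  runAsGroup: 65532
  fsGroup: 65532
```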

This PR has:

  • been tested on a real K8S cluster to ensure creation of a brand new Druid cluster works.
  • been tested for backward compatibility on a real K8S cluster by applying the changes introduced here on an existing Druid cluster. If there are any backward incompatible changes then they have been noted in the PR description.
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious for an unfamiliar reader.
  • added documentation for new or modified features or behaviors.

Key changed/added files in this PR
  • MyFoo
  • OurBar
  • TheirBaz

@AdheipSingh (Contributor)

Thanks @cyril-corbon, LGTM.

@itamar-marom, please cross-review this PR, thanks.

chart/values.yaml (review thread outdated, resolved)
@itamar-marom (Collaborator)

@cyril-corbon why did the manager flags change? Is it a Kubebuilder change or a custom change of ours? I want to avoid any unnecessary customizations on our side (or Operator SDK code).

Regarding the namespace change: it's tricky, since Kubebuilder generates the namespace as "druid-operator-system". That means that any time we re-generate the CRDs we will need to take care of it. BTW, this is also a customization that separates us from Kubebuilder best practices.

@AdheipSingh (Contributor)

BTW I did build the current master before cutting v1.1.0; the charts LGTM.

@cyril-corbon (Collaborator, Author)

> @cyril-corbon why did the manager flags change? Is it a Kubebuilder change or a custom change of ours? I want to avoid any unnecessary customizations on our side (or Operator SDK code).
>
> Regarding the namespace change: it's tricky, since Kubebuilder generates the namespace as "druid-operator-system". That means that any time we re-generate the CRDs we will need to take care of it. BTW, this is also a customization that separates us from Kubebuilder best practices.

For the flag, it was due to a misconfiguration on my side; I'll revert this change.
For the namespace, should we set `druid-operator-system` for both `make deploy` and `make helm-install-druid-operator`?

@AdheipSingh (Contributor)

> For the namespace, should we set `druid-operator-system` for both `make deploy` and `make helm-install-druid-operator`?

SGTM
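
For context, in a Kubebuilder scaffold the deploy namespace comes from `config/default/kustomization.yaml`; a minimal sketch of keeping both paths on the generated default (the `namePrefix` and resource list here are assumptions about the scaffold, not taken from this PR):

```yaml
# config/default/kustomization.yaml (sketch): Kubebuilder sets the operator
# namespace here. Keeping the generated "druid-operator-system" default lets
# `make deploy` and `make helm-install-druid-operator` agree without
# re-patching after every re-generation.
namespace: druid-operator-system
namePrefix: druid-operator-   # assumption: the scaffolded name prefix
resources:
  - ../crd
  - ../rbac
  - ../manager
```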

docs/features.md (outdated)
- ```NOTE: User must be aware of this feature, there might be cases where crashloopback might be caused due probe failure, fault image etc, the operator shall keep on deleting on each re-concile loop. Default Behavior is True ```

## Scaling of Druid Nodes
- Operator supports ```HPA autosaling/v2beta2``` Spec in the nodeSpec for druid nodes. In case HPA deployed, HPA controller maintains the replica count/state for the particular statefulset referenced. Refer to ```examples.md``` for HPA configuration.

@cyril-corbon (Collaborator, Author)


Suggested change
- Operator supports ```HPA autosaling/v2beta2``` Spec in the nodeSpec for druid nodes. In case HPA deployed, HPA controller maintains the replica count/state for the particular statefulset referenced. Refer to ```examples.md``` for HPA configuration.
- Operator supports ```HPA autosaling/v2``` Spec in the nodeSpec for druid nodes. In case HPA deployed, HPA controller maintains the replica count/state for the particular statefulset referenced. Refer to ```examples.md``` for HPA configuration.
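
For reference, a minimal sketch of an `autoscaling/v2` HPA spec embedded in a druid nodeSpec, in the spirit of the suggestion above. The `hpAutoscaler` key, the node group name, and the StatefulSet name are illustrative assumptions; `examples.md` in the repo has the authoritative layout.

```yaml
# Sketch only: an autoscaling/v2 HPA spec attached to one druid node group.
# The `hpAutoscaler` key and all names below are illustrative assumptions.
nodes:
  brokers:
    nodeType: broker
    replicas: 1
    hpAutoscaler:
      minReplicas: 1
      maxReplicas: 4
      scaleTargetRef:
        apiVersion: apps/v1
        kind: StatefulSet
        name: druid-cluster-brokers
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 60
```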

Signed-off-by: Cyril Corbon <cyril.corbon@dailymotion.com>
@itamar-marom (Collaborator)

@cyril-corbon @AdheipSingh why aren't we merging this?

Comment on lines 48 to 52:

```yaml
runAsNonRoot: true
fsGroup: 65532
runAsUser: 65532
runAsGroup: 65532
```
@AdheipSingh (Contributor) commented Jul 12, 2023


Why do we need this? Can we know the exact reason?

@cyril-corbon (Collaborator, Author)


It was to match the user/group defined in the Dockerfile of the operator, here: https://github.com/cyril-corbon/druid-operator/blob/master/Dockerfile#L31
I also noted this:

> set user / group / fsGroup or the operator crashes in 1.26

but I cannot reproduce it anymore.

@cyril-corbon (Collaborator, Author) commented Jul 12, 2023


IMHO it's a good idea to set these values by default.
Both containers of the pod have the same uid/gid, so it should not be an issue to set this at pod level.

```console
$ docker history gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
IMAGE          CREATED        CREATED BY                                      SIZE      COMMENT
60cb20b77a0f   9 months ago   /bin/sh -c #(nop)  ENTRYPOINT ["/usr/local/b…   0B
<missing>      9 months ago   /bin/sh -c #(nop)  USER 65532:65532             0B
```

I can remove it if you want or if you think it's not useful 😄
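
To illustrate the pod-level placement being discussed, a sketch of how the rendered operator Deployment would carry it (container names, labels, and the manager image are assumptions; the uid/gid values and the kube-rbac-proxy image come from this thread):

```yaml
# Sketch only: a pod-level securityContext is inherited by both containers,
# which share uid/gid 65532 per the docker history above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: druid-operator-controller-manager   # assumption: scaffolded name
spec:
  selector:
    matchLabels:
      control-plane: controller-manager     # assumption
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      securityContext:                       # pod level: applies to all containers
        runAsNonRoot: true
        runAsUser: 65532
        runAsGroup: 65532
        fsGroup: 65532
      containers:
        - name: manager                      # assumption: container name
          image: datainfrahq/druid-operator:latest   # assumption: image ref
        - name: kube-rbac-proxy
          image: gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
```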

@AdheipSingh (Contributor)


Makes sense.
It's only related to the kube-rbac-proxy container, right? Not the druid-operator?

@cyril-corbon (Collaborator, Author)


It's related to both containers. It's not an issue as they share the same uid/gid.

@itamar-marom (Collaborator)

@AdheipSingh @cyril-corbon Are we merging it? After that, we can release the new version we wanted.

@AdheipSingh AdheipSingh merged commit 65138b0 into datainfrahq:master Jul 20, 2023